LIP30 Day 29
With the popularity of ChatGPT growing, many organizations are publishing open-weights models. The most notable ones are Llama 2 from Meta, Stable LM from Stability AI, and Mistral from Mistral AI. These models let anyone run a ChatGPT-like experience on their own hardware.
The easiest way I have found to run a local LLM is Ollama, an open source project. It is a macOS app that you can download and run locally.
Once installed, it has a model registry which makes it easy to install and try out new models.
On the terminal, we can try out the mistral model by running:

ollama run mistral

If the model is not installed, Ollama will automatically download it before running it.
We will try the Mistral 7B model. Mistral 7B is a cutting-edge language model with 7.3 billion parameters, and it has proven to be one of the most capable language models for its size.
Running it locally, it is amazingly fast on my MacBook Pro M1 Max:
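Beyond the interactive terminal, Ollama also exposes a local REST API (on port 11434 by default), which makes it easy to call the model from your own scripts. Here is a minimal sketch using only Python's standard library; the prompt is just an example, and the request is wrapped in a try/except so the script degrades gracefully if Ollama isn't running.

```python
import json
import urllib.request

# Ollama serves a REST API on localhost:11434 by default.
# /api/generate takes a model name and a prompt; with "stream": False
# it returns a single JSON object instead of a token stream.
payload = json.dumps({
    "model": "mistral",
    "prompt": "Why is the sky blue?",  # example prompt, swap in your own
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req) as resp:
        # The generated text lives under the "response" key.
        print(json.loads(resp.read())["response"])
except OSError as err:
    # Ollama isn't running (or isn't reachable) on this machine.
    print(f"could not reach Ollama: {err}")
```

The same endpoint works from any language that can make an HTTP request, which is handy for wiring the model into existing tools.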
Now imagine what becomes possible when you can use this model even without internet access.